With the rapid advancement of technology, security has become a major concern due to the increase in malware activity, which poses a serious threat to the safety and security of computer systems and stakeholders. To maintain the safety of stakeholders, especially end users, protecting data from fraudulent efforts is one of the most pressing concerns. A set of malicious programming code, scripts, active content, or intrusive software that aims to compromise intended computer systems and programs or mobile and web applications is referred to as malware. According to one study, naive users cannot distinguish between malicious and benign applications. Therefore, computer systems and mobile applications should be designed to detect malicious activities and protect stakeholders. Many algorithms are available for detecting malware activities by leveraging novel concepts including artificial intelligence, machine learning, and deep learning. In this study, we emphasize artificial intelligence (AI) based techniques for detecting and preventing malware activity. We present a detailed review of current malware detection technologies, their shortcomings, and ways to improve efficiency. Our study shows that adopting futuristic approaches for the development of malware detection applications should offer significant advantages. An understanding of this synthesis should help researchers conduct further research on malware detection and prevention using AI.
Semi-supervised learning (SSL) has made significant strides in the field of remote sensing. Large labeled datasets for SSL methods are uncommon, and manually labeling datasets is expensive and time-consuming. Furthermore, accurately identifying remote sensing satellite images is more complicated than it is for conventional images. Class-imbalanced datasets are another prevalent phenomenon, and models trained on them become biased towards the majority classes, which is a critical cause of an SSL model's subpar performance. We aim to address the issue of labeling unlabeled data and to solve the model bias problem caused by imbalanced datasets while achieving better accuracy. To accomplish this, we create "artificial" labels and train a model to reach reasonable accuracy. We iteratively redistribute the classes through resampling using a distribution alignment technique. We use a variety of class-imbalanced satellite image datasets: EuroSAT, UCM, and WHU-RS19. On the balanced UCM dataset, our method outperforms the previous methods MSMatch and FixMatch by 1.21% and 0.6%, respectively. For imbalanced EuroSAT, our method outperforms MSMatch and FixMatch by 1.08% and 1%, respectively. Our approach significantly lessens the requirement for labeled data, consistently outperforms alternative approaches, and resolves the issue of model bias caused by class imbalance in datasets.
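As a rough illustration of the mechanics described above (the abstract gives no implementation details, so the function names, the 0.95 confidence threshold, and the EMA bookkeeping below are assumptions), a confidence-thresholded pseudo-label loss with distribution alignment might look like this:

```python
# Minimal sketch: confidence-thresholded pseudo-labels with distribution
# alignment for class-imbalanced SSL (all details here are assumptions).
import torch
import torch.nn.functional as F

def align_distribution(probs, target_dist, model_dist_ema):
    # Rescale predicted class probabilities so that, on average, they match
    # a target class distribution (e.g. uniform, or one estimated by resampling).
    aligned = probs * target_dist / (model_dist_ema + 1e-6)
    return aligned / aligned.sum(dim=1, keepdim=True)

def pseudo_label_loss(model, weak_x, strong_x, target_dist, model_dist_ema,
                      threshold=0.95):
    with torch.no_grad():
        probs = F.softmax(model(weak_x), dim=1)        # weakly augmented view
        probs = align_distribution(probs, target_dist, model_dist_ema)
        conf, pseudo = probs.max(dim=1)                # "artificial" labels
        mask = (conf >= threshold).float()             # keep only confident samples
    logits_s = model(strong_x)                         # strongly augmented view
    loss = (F.cross_entropy(logits_s, pseudo, reduction="none") * mask).mean()
    return loss, probs.mean(dim=0)                     # batch mean for updating the EMA
```

Here `target_dist` is the desired class distribution and `model_dist_ema` an exponential moving average of the model's own predicted distribution.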
To ensure proper knowledge representation of the kitchen environment, it is vital for kitchen robots to recognize the states of the food items that are being cooked. Although the domain of object detection and recognition has been extensively studied, the task of object state classification has remained relatively unexplored. The high intra-class similarity of ingredients during different states of cooking makes the task even more challenging. Researchers have recently proposed adopting Deep Learning-based strategies; however, these have yet to achieve high performance. In this study, we utilized the self-attention mechanism of the Vision Transformer (ViT) architecture for the Cooking State Recognition task. The proposed approach encapsulates the globally salient features from images, while also exploiting the weights learned from a larger dataset. This global attention allows the model to withstand the similarities between samples of different cooking objects, while the use of transfer learning helps to overcome the lack of inductive bias by utilizing pretrained weights. To improve recognition accuracy, several augmentation techniques have been employed as well. Evaluation of our proposed framework on the `Cooking State Recognition Challenge Dataset' achieves an accuracy of 94.3%, which significantly outperforms the state-of-the-art.
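A minimal sketch of the transfer-learning setup described above, assuming a torchvision ViT backbone and generic augmentations (the authors' exact backbone, augmentation set, and number of state classes are not specified here):

```python
# Minimal sketch: fine-tune an ImageNet-pretrained ViT for cooking-state
# classification with standard augmentations (choices below are assumptions).
import torch.nn as nn
from torchvision import transforms
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_STATES = 11  # hypothetical number of cooking-state classes

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)  # transfer learning
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_STATES)
# The self-attention backbone is fine-tuned together with the new classification head.
```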
The emergence of deep neural networks (DNNs) has once again drawn enormous attention to artificial neural networks (ANNs). They have become the state-of-the-art models and have won various machine learning challenges. Although these networks are inspired by the brain, they lack biological plausibility and differ structurally from the brain. Spiking neural networks (SNNs) have been around for a long time, and they have been investigated to understand the dynamics of the brain. However, their application to real-world and complicated machine learning tasks was limited. Recently, they have shown great potential in solving such tasks. Their energy efficiency and temporal dynamics hold many promises for their future development. In this work, we review the structures and performances of SNNs on image classification tasks. The comparisons illustrate that these networks show great capabilities for more complicated problems. Furthermore, the simple learning rules developed for SNNs, such as STDP and R-STDP, can be potential alternatives to the backpropagation algorithm used in DNNs.
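For readers unfamiliar with STDP, a minimal pair-based update rule is sketched below; the learning rates and time constants are illustrative only:

```python
# Minimal sketch of pair-based STDP: the weight change depends on the relative
# timing of pre- and post-synaptic spikes (parameters are illustrative).
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Update one synaptic weight from a single pre/post spike-time pair (ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiation
        w += a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:  # post fires before pre -> depression
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, w_min, w_max))

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # potentiation example
```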
Although a substantial amount of research has been devoted to morph detection, most studies fail to generalize to morphed faces beyond their training paradigm. In addition, recent morph detection methods are highly vulnerable to adversarial attacks. In this paper, we aim to learn a morph detection model with high generalization to a variety of morphing attacks and high robustness against different adversarial attacks. To this end, we develop an ensemble of convolutional neural networks (CNNs) and Transformer models to benefit from their capabilities simultaneously. To improve the robust accuracy of the ensemble model, we employ multi-perturbation adversarial training and generate adversarial examples with high transferability. Our exhaustive evaluations demonstrate that the proposed robust ensemble model generalizes to several morphing attacks and face datasets. In addition, we validate that our robust ensemble model achieves better robustness against several adversarial attacks while outperforming state-of-the-art studies.
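One way such an ensemble and its adversarial training could be wired together is sketched below; the equal logit weighting and the PGD settings are assumptions, and multi-perturbation training would mix in further threat models (e.g., an L2 step using `grad / grad.norm()` instead of `grad.sign()`):

```python
# Minimal sketch: average the logits of a CNN and a Transformer member and
# craft L-infinity PGD examples against the ensemble (all settings assumed).
import torch
import torch.nn.functional as F

def ensemble_logits(cnn, vit, x):
    return 0.5 * (cnn(x) + vit(x))   # simple logit averaging

def pgd_linf(cnn, vit, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(ensemble_logits(cnn, vit, x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv

# Training step (sketch): fit both members on the crafted adversarial batch.
# loss = F.cross_entropy(ensemble_logits(cnn, vit, pgd_linf(cnn, vit, x, y)), y)
```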
Road construction projects maintain transportation infrastructure. These projects range from short-term (e.g., resurfacing or fixing potholes) to long-term (e.g., adding a shoulder or building a bridge). Traditionally, deciding what the next construction project is, and scheduling when it should happen, has been done through human inspection using special equipment. This approach is costly and difficult to scale. An alternative is to use computational methods to integrate and analyze multiple types of past and present spatiotemporal data to predict the location and time of future road constructions. This paper reports on such an approach, which uses a deep-neural-network-based model to predict future constructions. Our model applies convolutional and recurrent components to a heterogeneous dataset consisting of construction, weather, map, and road network data. We also report on how we addressed the lack of adequate publicly available data by building a large dataset named "US Constructions," which includes 6.2 million cases of road construction, augmented with a variety of spatiotemporal attributes and road network features, collected across the contiguous United States (US) between 2016 and 2021. Using extensive experiments on several major cities in the US, we show the applicability of the work in accurately predicting future constructions, with an average F1 score of 0.85 and accuracy of 82.2%, outperforming the baselines by 52.2%. Furthermore, we demonstrate how our training pipeline addresses the spatial sparsity of the data.
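A minimal sketch of a model with convolutional and recurrent components of the kind described above; the channel layout, grid size, and the GRU are assumptions rather than the authors' architecture:

```python
# Minimal sketch: map a sequence of spatial feature grids (construction, weather,
# map, and road-network channels) to a construction probability (shapes assumed).
import torch
import torch.nn as nn

class ConvRecurrentPredictor(nn.Module):
    def __init__(self, in_channels=8, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # probability of a future construction

    def forward(self, x):                       # x: (batch, time, channels, H, W)
        b, t, c, h, w = x.shape
        feats = self.encoder(x.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)
        return torch.sigmoid(self.head(out[:, -1]))

model = ConvRecurrentPredictor()
print(model(torch.randn(2, 6, 8, 32, 32)).shape)  # torch.Size([2, 1])
```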
In this paper, we seek to draw connections between frontal and profile face images in an abstract embedding space. We exploit this connection using a coupled-encoder network to project frontal and profile face images into a common latent embedding space. The proposed model forces similarity of the representations in the embedding space by maximizing the mutual information between two views of the face. The proposed coupled encoder benefits from three contributions for matching faces with extreme pose differences. First, we leverage pose-aware contrastive learning to maximize the mutual information between the frontal and profile representations of the same identity. Second, a memory buffer, consisting of latent representations accumulated over past iterations, is integrated into the model so that it can refer to relatively many more instances than the mini-batch size allows. Third, a novel pose-aware adversarial domain adaptation method forces the model to learn an asymmetric mapping from profile to frontal representations. Within our framework, the coupled encoder learns to enlarge the margin between the distributions of genuine and impostor faces, which results in high mutual information between different views of the same identity. The effectiveness of the proposed model is investigated through extensive experiments, evaluations, and ablation studies on four benchmark datasets, along with comparisons to compelling state-of-the-art algorithms.
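A minimal sketch of the memory-buffer idea in the second contribution, written as an InfoNCE-style loss whose negatives come from embeddings accumulated over past iterations (queue size, temperature, and the exact loss form are assumptions):

```python
# Minimal sketch: contrastive loss between frontal and profile embeddings with
# negatives drawn from a FIFO memory buffer (all settings are assumptions).
import torch
import torch.nn.functional as F

def contrastive_loss_with_memory(frontal_z, profile_z, memory, temperature=0.07):
    frontal_z = F.normalize(frontal_z, dim=1)          # (B, D) from the frontal encoder
    profile_z = F.normalize(profile_z, dim=1)          # (B, D) from the profile encoder
    pos = (frontal_z * profile_z).sum(dim=1, keepdim=True)   # (B, 1) positive pairs
    neg = frontal_z @ memory.t()                             # (B, K) against the buffer
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)                   # positives sit at index 0

def update_memory(memory, new_z, max_size=4096):
    # FIFO queue of normalized, detached embeddings from past iterations.
    memory = torch.cat([F.normalize(new_z.detach(), dim=1), memory], dim=0)
    return memory[:max_size]
```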
Despite the fundamental distinction between adversarial and natural training (AT and NT), AT methods generally adopt momentum SGD (MSGD) for the outer optimization. This paper aims to analyze this choice by investigating the overlooked role of outer optimization in AT. Our exploratory evaluations reveal that AT induces a higher gradient norm and variance compared to NT. This phenomenon hinders the outer optimization in AT, since the convergence rate of MSGD is highly dependent on the variance of the gradients. To this end, we propose an optimization method called ENGM, which regularizes the contribution of each input example to the average mini-batch gradient. We prove that the convergence rate of ENGM is independent of the variance of the gradients, and thus it is suitable for AT. We introduce a trick to reduce the computational cost of ENGM, using empirical observations on the correlation between the norm of gradients w.r.t. the network parameters and the input examples. Our extensive evaluations and ablation studies on CIFAR-10, CIFAR-100, and TinyImageNet demonstrate that ENGM and its variants consistently improve the performance of a wide range of AT methods. Furthermore, ENGM alleviates major shortcomings of AT, including robust overfitting and high sensitivity to hyperparameter settings.
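As a rough interpretation only (the paper's actual ENGM update is not reproduced here), regularizing each example's contribution to the mini-batch gradient could be sketched as capping per-example gradient norms before averaging:

```python
# Minimal sketch (interpretation, not the authors' algorithm): cap each
# example's gradient norm before averaging, so no single example dominates.
import torch
import torch.nn.functional as F

def capped_batch_gradient(model, x, y, max_norm=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    avg = [torch.zeros_like(p) for p in params]
    for xi, yi in zip(x, y):                    # per-example gradients (slow, illustrative)
        loss = F.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(max_norm / (norm + 1e-12), max=1.0)
        for a, g in zip(avg, grads):
            a += scale * g / x.size(0)          # bounded contribution to the average
    for p, a in zip(params, avg):
        p.grad = a                              # optimizer.step() then uses the capped average
```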
Learning suitable whole slide image (WSI) representations for efficient retrieval systems is a non-trivial task. The WSI embeddings obtained from current methods lie in Euclidean space, which is not ideal for efficient WSI retrieval. In addition, most current methods require high GPU memory due to processing multiple sets of patches simultaneously. To address these challenges, we propose a novel framework for learning binary and sparse WSI representations utilizing deep generative modelling and Fisher Vectors. We introduce new loss functions for learning sparse and binary permutation-invariant WSI representations, employing instance-based training to improve memory efficiency. The learned WSI representations are validated on The Cancer Genome Atlas (TCGA) and Liver-Kidney-Stomach (LKS) datasets. The proposed method outperforms Yottixel (the most recent histopathology image search engine) in terms of retrieval accuracy and speed. In addition, we achieve competitive performance against SOTA on the public benchmark LKS dataset for WSI classification.
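A minimal sketch of the Fisher Vector encoding and binarization ingredients (the deep generative model and the proposed loss functions are not reproduced; the GMM size and patch-feature dimension are assumptions):

```python
# Minimal sketch: encode a bag of patch features with a mean-gradient Fisher
# Vector from a diagonal-covariance GMM, then binarize with sign().
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_mu(patch_feats, gmm):
    # Gradient of the GMM log-likelihood w.r.t. the component means.
    q = gmm.predict_proba(patch_feats)                    # (N, K) posteriors
    diff = patch_feats[:, None, :] - gmm.means_[None]     # (N, K, D)
    diff /= np.sqrt(gmm.covariances_)[None]               # diagonal covariances
    fv = (q[:, :, None] * diff).sum(axis=0)               # (K, D)
    fv /= patch_feats.shape[0] * np.sqrt(gmm.weights_)[:, None]
    return fv.ravel()

patches = np.random.randn(500, 128)                       # hypothetical patch embeddings
gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(patches)
binary_wsi_code = np.sign(fisher_vector_mu(patches, gmm)) # compact binary WSI code
```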
Domain adaptation of large neural language models (NLMs) is coupled with massive amounts of unstructured data in the pretraining phase. In this study, however, we show that pretrained NLMs learn in-domain information more effectively and faster from a compact subset of the data that focuses on the key information in the domain. We construct these compact subsets from unstructured data using a combination of abstractive summaries and extracted keywords. In particular, we rely on BART to generate abstractive summaries and on KeyBERT to extract keywords from these summaries (or directly from the original unstructured text). We evaluate our approach using six different settings: three datasets combined with two distinct NLMs. Our results reveal that task-specific classifiers trained on top of NLMs pretrained with our method outperform methods based on traditional pretraining, i.e., random masking of the entire data, as well as methods without pretraining. Further, we show that our strategy reduces pretraining time by up to five times compared to vanilla pretraining. The code for all of our experiments is publicly available at https://github.com/shahriargolchin/compact-pretraining.
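A minimal sketch of the summarize-then-extract step, assuming the Hugging Face summarization pipeline with a BART checkpoint and KeyBERT defaults (the authors' exact models and parameters live in the linked repository):

```python
# Minimal sketch: BART produces an abstractive summary and KeyBERT extracts
# keywords from it; summaries plus keywords form the compact pretraining subset.
from transformers import pipeline
from keybert import KeyBERT

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
kw_model = KeyBERT()

def compact_record(document: str, max_len: int = 120, top_n: int = 10):
    summary = summarizer(document, max_length=max_len, truncation=True)[0]["summary_text"]
    keywords = [kw for kw, _ in kw_model.extract_keywords(summary, top_n=top_n)]
    return summary, keywords
```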